Market Roundup

March 22, 2002

 

This Week

 

Network Associates to Buy Out McAfee.com: Is the Tail Wagging the Dog?

 

IBM/DOE Announce Nationwide Science Grid Collaboration

 

Intel Introduces Dual Processor Blade Products

 

Creating a More Valuable Novell?

 

Oracle “Unbreakable” Claim Challenged by CERT: 37 Security Holes Found

 

Airlines Eliminate Travel Agent Commissions: A Lesson for the Enterprise

 

Network Associates to Buy Out McAfee.com: Is the Tail Wagging the Dog?

By Jim Balderston

Network Associates announced this week that it is seeking to acquire the remaining outstanding stock of McAfee.com, an NAI spinout that went public in 1999. Network Associates presently owns approximately 80% of McAfee.com stock and has offered $18.64 per share for the remaining shares, for a total of roughly $220 million. The company said it wants to bring McAfee.com back under its control to end confusion over the two brands. McAfee.com, with 1.2 million paid subscribers, offers antivirus, firewall, and other consumer security products online, and does not produce boxed software. Network Associates still sells McAfee-branded antivirus software as a boxed product to consumers, while also selling server-based antivirus software to enterprise customers. Network Associates has been trimming its product offerings, selling off its firewall product lines and attempting to sell its PGP encryption software to a third party as well. The McAfee.com Board of Directors has not commented publicly on the Network Associates proposal, but has asked outside consultants to appraise the offer.

Back when Network Associates proposed the idea of spinning out McAfee.com, we wondered just how the two companies would resolve the competing – and sometimes confusing – brands of McAfee.com and McAfee Software. Clearly this confusion has not been resolved, as nearly every McAfee boxed product customer heads to the McAfee.com web site whenever there is a major virus scare. Still, we have to wonder if Network Associates is approaching this problem bass-ackwards.

Despite the brand confusion issue, there is substantial evidence that the McAfee.com experiment has been notably successful. With more than a million paying subscribers using a relatively new means of software distribution, one could argue that McAfee.com is on to something. We would certainly make that argument, and give that something the name of “Service Computing,” which includes the idea that software can be sold, delivered, and updated as a service, without the need for sneakernets or IT intervention. In this sense, McAfee.com is well along the path to a future that the boxed software folks at Network Associates can only fear. While we cannot speculate as to how Network Associates would integrate McAfee.com into its existing, largely enterprise-focused corporate infrastructure, we can say the following: It is not likely that Network Associates would have built a base of 1.2 million consumer subscribers if this project had been left in-house. Nor would the company have invested the substantial resources needed in the bandwidth, storage, and other infrastructure that allows McAfee.com users to reach the site when new viruses break out. Instead, we suspect the company would have stuck with the old boxed software model, believing that in crunch time, inertia wins. As a politically incorrect but viable alternative, we would argue that instead of reabsorbing its former spin-off, Network Associates should put all of its consumer retail antivirus products under the wing of McAfee.com, and let the company that understands tomorrow’s consumer marketplace lead the charge. By doing so, McAfee.com could actually extend the McAfee brand well into the future, instead of relegating it to collecting dust on the shelves of software stores nationwide.
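
To make the “delivered and updated as a service” idea concrete, the hypothetical Python sketch below shows the kind of automatic signature-update check such a service implies; the URL, manifest format, and version scheme are our own illustrative assumptions, not a description of McAfee.com’s actual service.

# Hypothetical sketch only: a client checks a vendor-hosted endpoint for the
# latest virus-signature version and pulls new definitions automatically,
# with no boxed media or IT hand-holding. All names and formats are made up.
import json
import urllib.request

UPDATE_URL = "https://updates.example.com/antivirus/latest.json"   # assumed endpoint

def check_for_update(installed_version):
    with urllib.request.urlopen(UPDATE_URL) as resp:
        manifest = json.load(resp)              # e.g. {"version": 42, "url": "..."}
    if manifest["version"] > installed_version:
        with urllib.request.urlopen(manifest["url"]) as resp:
            signatures = resp.read()            # new definitions delivered as data
        return manifest["version"], signatures
    return installed_version, None              # already current

if __name__ == "__main__":
    version, payload = check_for_update(installed_version=41)
    print("signature set now at version", version)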

 

IBM/DOE Announce Nationwide Science Grid Collaboration

By Charles King

IBM and the U.S. Department of Energy’s (DOE) National Energy Research Scientific Computing Center (NERSC) have announced a collaboration to begin deploying the first systems on a nationwide computing Grid. The systems initially joining the DOE Science Grid, located at NERSC’s headquarters at DOE’s Lawrence Berkeley National Laboratory, include a 3,328-processor IBM supercomputer; a 160-processor IBM Netfinity cluster system; and NERSC’s HPSS (High Performance Storage System), an archival storage system with a capacity of 1.4 petabytes that is managed with IBM servers. The DOE claims the Science Grid will offer scientists across the country access to supercomputers in the same way an electrical grid provides access to widely dispersed power sources. The system will provide the high-end data and computing power necessary for large-scale global warming, genomic, and astrophysics research projects. NERSC had originally planned to make its systems available via the Science Grid by 2004, but the collaboration between IBM and DOE will allow a core group of users to begin accessing data later this year.

Grid computing, which has been described as the basis of an “intelligent” Web, is a layer of software and services that links collections of systems together, allowing them to share resources such as data, applications, storage, and raw computing power. As a result, Grid users can collaborate and pursue projects in ways that would be difficult or impossible outside of a small handful of research labs and universities. In fact, at its heart the IBM/DOE collaboration is all about using Grid solutions to leverage the government’s high-end computing resources across a wider population of scientists and geographic locations, and illuminates why the lion’s share of Grid deployments have occurred in research labs and universities. The fact of the matter is that the kind of hardware NERSC is bringing to the Grid table is hugely complicated and expensive, and tends not to be used to its fullest capacity if accessed only by personnel onsite. A Grid deployment such as the one proposed by IBM and the DOE would not only allow larger numbers of scientists to pursue more highly complex and collaborative projects, but should also help ensure that the DOE’s technology investment delivers maximum returns.
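
To make the resource-sharing idea concrete, here is a minimal, hypothetical Python sketch of how a Grid scheduler might farm independent work units out to whichever registered sites have spare capacity; the site names, job names, and least-loaded policy are illustrative assumptions, not a description of the actual DOE Science Grid software.

# Hypothetical sketch of the Grid idea: independent work units are dispatched
# to whichever registered compute sites have spare capacity, so expensive
# hardware is shared instead of sitting idle behind one site's walls.
class ComputeSite:
    def __init__(self, name, free_cpus):
        self.name = name
        self.free_cpus = free_cpus
    def run(self, job):
        # Stand-in for real remote execution (batch queue, MPI launch, etc.)
        return f"{job} ran at {self.name}"

def schedule(sites, jobs):
    """Toy grid scheduler: send each job to the site with the most spare capacity."""
    results = []
    for job in jobs:
        site = max(sites, key=lambda s: s.free_cpus)   # least-loaded site
        site.free_cpus -= 1                            # reserve one CPU for the job
        results.append(site.run(job))
    return results

sites = [ComputeSite("ibm-sp-supercomputer", 3328), ComputeSite("netfinity-cluster", 160)]
print(schedule(sites, [f"climate-model-run-{i}" for i in range(4)]))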

Does this mean that research labs constitute the best place for Grid? Hardly. In fact, technology history reveals a host of solutions (including the Internet) created by and for the scientific community that were later developed into successful consumer and business applications. Overall, we believe that the benefits Grid offers, including more effective problem solving and collaboration, can be as valuable to business communities as they are among scientists. However, we also believe that for the time being Grid solutions will be more at home in research facilities than in wider commercial deployments, at least until issues including standards integration, security, access/authentication, and software property rights are hashed out satisfactorily. That said, Grid offers a tantalizing infrastructure model for what we call Service Computing, the delivery of computing power and application access in a model similar to that of common utilities such as electricity and water. For Service Computing to become a reality, it will require the same sort of robust, flexible, highly automated, and autonomic infrastructure that IBM and the DOE are planning for their collaboration. To that end, we believe the DOE Science Grid constitutes a significant step towards a Service Computing future.

 

Intel Introduces Dual Processor Blade Products

By Charles King

Intel has announced the immediate availability of dual processing capabilities in the company’s thin, low-power “ultra dense” blade server products, i.e., designs that pack the largest possible number of processors within narrow size and thermal constraints. Intel’s new dual systems feature a pair of Low Voltage Intel Pentium III 800MHz processors, each with 512KB of on-chip Level 2 cache and support for up to 4GB of RAM, and they are the first Intel-based “ultra dense” dual processing products to support PC133 SDRAM memory and a 133MHz system bus. Dual processor motherboards based on the chips are available from companies including Force Computers, I-Bus/Phoenix, and Kontron, and are aimed at applications in the communications, transportation, automation, medical, and military markets. Intel expects blade server systems based on the products to be available later this year from Dell, Fujitsu/Siemens, and other major OEMs. No pricing information was included in the announcement.
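
To put the memory specifications in perspective, the quick back-of-the-envelope calculation below (our own arithmetic, not Intel’s) shows the theoretical peak bandwidth a single-data-rate PC133 SDRAM interface on a 64-bit bus provides.

# Theoretical peak bandwidth of PC133 SDRAM: SDR SDRAM moves one 64-bit
# (8-byte) word per clock, so peak bandwidth is bus width times clock rate.
bus_width_bytes = 8            # 64-bit data path
bus_clock_mhz = 133            # one transfer per clock cycle
peak_mb_per_s = bus_width_bytes * bus_clock_mhz
print(f"Peak memory bandwidth: {peak_mb_per_s} MB/s (about 1.06 GB/s)")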

Blade servers have become hot news over the past few months, and with good reason. Given the lagging economy and volatile electricity costs of the past year, businesses have become increasingly sensitive to a range of TCO issues, especially those touching operational expenses. Blade servers’ space, power, and cooling efficiencies are all welcome news to financially sensitive organizations, but to our way of thinking, the flexibility these products offer is as beneficial as their overall thriftiness. Blade server architecture provides rapid deployment and scalability options, allowing enterprises to quickly grow or reconfigure systems to fit prevailing needs. Additionally, a host of flexible server management applications can help increase or extend blade servers’ overall efficiency. Add that to the fact that Intel’s x86 architecture can support either Windows or Linux operating systems and data center applications, and it is easy to see the inspiration behind blade servers’ increasing popularity. Given these factors and the likely performance improvements offered by dual processor configurations, we believe Intel’s new blade server systems are likely to be greeted fondly by the company’s OEM partners. Along with the vendors mentioned in the press release, we would not be surprised to see new system announcements from Intel-based blade server vendors including Compaq, HP, and RLX Technologies (a strategic ally of IBM). Additionally, we believe this announcement offers further evidence (if any were needed) of Intel’s intention to become the processor vendor of choice on platforms well beyond the desktop.

 

Creating a More Valuable Novell?

By Siamanto

Novell has announced the upcoming public beta availability of its eDirectory 8.7. The release introduces both Web-based and wireless directory administration, along with expanded developer tools, including a to-be-released set of UDDI add-on services. Novell's clients currently use eDirectory to create management frameworks and repositories for user identities, access rights, platforms, and network resources. eDirectory's centralized management capability is positioned to lower administrative and support costs, and to provide a platform for business solutions in areas such as provisioning, user access control, application access, and desktop management.

If Metcalfe’s law of networking still holds, the value of a network, and of the people and things attached to it, grows roughly with the square of the number of participants connected to it. Novell’s announcement includes an alphabet soup of protocols and standards that eDirectory would embrace — SOAP, UDDI, TLS, XML, and LDAP — and we are left to wonder if the company is trying to be all things to all people. With UDDI, however, the possible interconnections multiply, driving markedly higher the value of being listed within a given directory. What is clear, though, is that Novell is planting itself firmly in the web services space by leveraging its directory technology. Of course, Novell’s technology has rarely been in question — just its marketing and execution ability. In this vein, the company has appointed a CMO and streamlined the ranks of its product and marketing organizations. We trust that the pruning will lead to growth that is more vigorous.
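
Because eDirectory speaks standard LDAP, the hypothetical Python sketch below (using the third-party python-ldap module) shows roughly what it looks like for an application to consult a central directory for a user’s group memberships instead of maintaining its own user store; the server address, credentials, base DN, and attribute name are illustrative assumptions, not Novell’s actual deployment or schema.

# Hypothetical example: an application reads identity data from a central
# LDAP directory (such as Novell eDirectory) rather than its own user table.
# Requires the third-party python-ldap package; all names below are made up.
import ldap

DIRECTORY_URL = "ldap://directory.example.com:389"   # assumed server
BIND_DN = "cn=admin,o=example"                       # assumed service account
BIND_PW = "secret"                                   # assumed password

def get_group_memberships(user_cn):
    conn = ldap.initialize(DIRECTORY_URL)
    conn.simple_bind_s(BIND_DN, BIND_PW)
    try:
        # Look the user up by common name and read a membership attribute;
        # real code should escape the filter value to avoid LDAP injection.
        results = conn.search_s(
            "o=example", ldap.SCOPE_SUBTREE,
            f"(cn={user_cn})", ["groupMembership"],
        )
        groups = []
        for _dn, attrs in results:
            groups.extend(value.decode() for value in attrs.get("groupMembership", []))
        return groups
    finally:
        conn.unbind_s()

print(get_group_memberships("jdoe"))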

Novell’s experience with providing secure and scalable directory services certainly puts the company into the technical running. Moreover, increasing the ability of other applications and services to leverage the contents of the directory enhances the overall value of its offering. In a climate where IT budgets are tight and customers are conservative, Novell must provide a set of offerings that have not only technical merit but true business value. In the final analysis, Novell’s positioning of its directory product as a platform for Web services and identity management could provide a great deal of value to its customer base. The question is: are customers listening?

 

Oracle “Unbreakable” Claim Challenged by CERT: 37 Security Holes Found

By Jim Balderston

Late last week CERT released an advisory warning that Oracle products have multiple vulnerabilities, including buffer overflows, insecure default settings, failures to enforce access controls, and failures to validate input. The impacts of these vulnerabilities include the execution of arbitrary commands or code, denial of service, and unauthorized access to sensitive information, according to CERT. The advisory covers systems running Oracle 9i Application Server and those running the Oracle 9i and 8i databases. Oracle has released a statement saying that no Oracle customers have reported problems relating to the issues raised by CERT.
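
As a concrete illustration of the “failure to validate input” class of flaw CERT describes, the hypothetical Python sketch below contrasts building a SQL statement by string concatenation with using bind variables, shown here with the cx_Oracle driver; the table, column, and connection details are illustrative assumptions, and the point is the general pattern rather than any specific hole from the advisory.

# Hypothetical illustration: concatenating untrusted input into SQL lets an
# attacker rewrite the query, while bind variables keep the input as data.
# Table and column names are made up; cx_Oracle is used only as an example client.
import cx_Oracle

def find_account_unsafe(conn, username):
    # DANGEROUS: input like  x' OR '1'='1  changes the meaning of the query.
    sql = "SELECT id, balance FROM accounts WHERE owner = '" + username + "'"
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchall()

def find_account_safe(conn, username):
    # Bind variable: the driver passes username as data, never as SQL text.
    cur = conn.cursor()
    cur.execute("SELECT id, balance FROM accounts WHERE owner = :owner",
                {"owner": username})
    return cur.fetchall()

# Usage sketch (connection string is an assumption):
# conn = cx_Oracle.connect("app_user/app_password@dbhost/ORCL")
# print(find_account_safe(conn, "jdoe"))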

Certainly Oracle has brought some of this scrutiny from CERT and other security experts upon itself with its claims, beginning last year, that Oracle databases and application servers are “unbreakable.” Oracle CEO Larry Ellison – never one to be shy in marketing his products – has touted his company’s wares with this one-word tagline and continues to do so in advertisements in leading publications, including the Wall Street Journal. Oracle even goes so far as to promise to make Microsoft products “unbreakable” when running Oracle databases on top of Microsoft servers. Perhaps so; probably not.

While taking potshots at Ellison’s enthusiastic promotion of his company’s products is an old — and easy — form of sport, there is no doubt that Ellison and Oracle are trying to leverage a key element of the emerging Service Computing model: reliability. While the “unbreakable” claims have drawn ire from various security-oriented folks, the issue of “unbreakability” goes far beyond security issues as it relates to Service Computing. As more and more enterprise end users (and those on the consumer end as well, for that matter) come to rely on access to information that helps them plan, manage, and complete the tasks of everyday life, reliability and five-nines availability become something much more than a marketing promise: they become a necessity. Security lapses are one way to interrupt access to such data. But so are poorly designed networks, overloaded servers, human error, and poor utilization of resources. Oracle claims that its products will create an “unbreakable” environment. We would beg to differ, and not just on the issue of security. The infrastructure of Service Computing is not something that comes out of a box. Instead, it will require a broad range of network, hardware, processing, and application deployments to meet what we believe to be the irresistible trend of providing true returns on investment by allowing end users to get the information they need when they want it. As the value of Service Computing — to both end users and enterprise bean counters — becomes ever more pronounced and visible, the motivations to deploy Service Computing infrastructures will become not only irresistible, but truly “unbreakable.”
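
For readers keeping score, “five-nines” is a concrete number rather than marketing shorthand; the small calculation below (our own arithmetic, offered as illustration) shows how little downtime each additional nine leaves in a year.

# Downtime budget per year at increasing availability levels.
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

for label, availability in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.3%}): about {downtime:.1f} minutes of downtime per year")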

 

Airlines Eliminate Travel Agent Commissions: A Lesson for the Enterprise

By Jim Balderston

Continental and American Airlines joined Delta this week in moving to eliminate all commissions for airline tickets sold through travel agents. The commissions, which were set at 5%, had been reduced in 1995 from their historical 10% level. The move to eliminate commissions did not catch the travel industry by surprise; however, many agents apparently are waiting to see if there is a viable survival strategy vis-à-vis the increasing influence of online travel booking sites. The airline industry lost nearly $7 billion last year, and said that paying commissions and printing and distributing tickets together constituted its third-largest cost, representing 8% of total costs.

While these developments are far from shocking, and have strong parallels in the stock brokerage industry, we believe there is an important lesson that needs to be well understood inside the enterprise. What has happened in the travel industry is that information that had been delivered only to the broker or agent is now delivered directly to the end user, eliminating – or at least severely reducing – the need and value imparted in the transaction by the agent or broker. A gatekeeper or information bottleneck has been removed between the people seeking the information and that information itself. Dare we dredge up that hoary old word, “disintermediation”? Oops, we did.

We are seeing in these events the effects and impacts of one of the primary goals and tenets of Service Computing: creating and maintaining constant and unimpeded access to information needed by end user constituencies. In many enterprises, there remains a huge cadre of information “brokers” or “agents” who hold and control access to information that many other people within the enterprise require to do their jobs. These gatekeepers, in some cases, add little or no value to the information and act solely as arbiters of who has access to what information, and when. The coming of a Service Computing environment will make them as expendable as stock brokers and travel agents. We think it is crucial for both the buyers and sellers of the building blocks of the Service Computing model to understand the need to address – for lack of a better word – the political side of Service Computing. The adoption of Service Computing, which we believe to be an irresistible trend, will force wholesale disruptions within the enterprise and its classically defined hierarchies. Both buyers and sellers of Service Computing technology will need to think “beyond the box” and examine how they are going to deal with the very human and organizational impacts of Service Computing. Ignoring these issues may undermine the very efficiencies that Service Computing offers by triggering the natural organizational tendency toward self-perpetuation.


The Sageza Group, Inc.

836 W El Camino Real
Mountain View, CA 94040-2512
650·390·0700   fax 650·649·2302
London +44 (0) 20·7900·2819

Munich +49 (0) 89·4201·7144

sageza.com
info@sageza.com

Copyright © 2002 The Sageza Group, Inc.
May not be duplicated or retransmitted without written permission